AI Act: Around forty associations, including the International Federation of Journalists, denounce a "betrayal of the objectives of the European regulation"

It was supposed to stand on two legs: the protection of fundamental rights and the emergence of an ecosystem built on artificial intelligence. But although it is not yet fully in force – it will be phased in through 2027 – the AI Act (Artificial Intelligence Act) is already limping.
At the beginning of August, some restrictive measures were imposed on software publishers, such as a ban on marketing facial recognition tools or selling behavior prediction programs, but real regulation of so-called "high-risk" AI, in education or border control for example, will not take place until 2026.
This second part of the AI Act was so eagerly awaited mainly because it was supposed to regulate the use of copyright-protected content by generative programs such as ChatGPT or Google's Gemini.
Rather than a binding framework, the European Commission has instead drawn up a code of practice to be submitted to the tech giants. They are invited to produce a document providing "a comprehensive overview of the data used to train a model. The publisher will list the main data collections and explain the other sources used," the Commission explained in a press release.
This amounts to a summary of the types of sources most commonly used to train generative AI and to feed it daily: nothing specific or detailed enough to allow proof of copyright infringement.
This is why some forty rights-holder organizations from around the world (the International Federation of Journalists, along with associations representing film producers, screenwriters, dubbing actors, translators, composers, book publishers and directors) jointly published a press release last Wednesday denouncing a "betrayal of the objectives of the European regulation on artificial intelligence." These creators' associations had been counting heavily on the AI Act to produce a protective framework from which the rest of the world could have drawn inspiration.
This code of practice "does not address the fundamental concerns that our sectors have consistently raised," the rights holders denounce, deploring "a missed opportunity to ensure meaningful protection of intellectual property rights in the context of the development of generative AI."
Far from being a compromise, this second part appears to work "for the sole benefit of providers of generative AI models who continually infringe copyright and related rights to train their models," the organizations charge.
Reporters Without Borders (RSF) had already walked out of the negotiations, insisting on another point just as crucial as copyright: the "Code does not contain a single concrete provision to combat the proven dangers that AI poses to access to reliable information. Democratic issues cannot be relegated, as they are today, to an appendix."
RSF wanted citizens' access to reliable information to be recognized as a fundamental right in the AI Act, in the face of the proliferation of deepfakes, automated fake-news sites and misleading information spread by generative AI, Elon Musk's Grok foremost among them. Yet this code of practice makes no mention of disinformation. As for the fundamental risks, concerning the proper conduct of elections for example, they are, as RSF points out, mentioned only in an appendix.
For the press, the challenge is twofold: a recent Pew Research Center survey showed that the summaries produced by generative AIs such as ChatGPT or Gemini often dissuade Internet users from going any further. They click on the links offered half as often as when they query a traditional search engine.
That means as many fewer visitors for media sites, which rely on this traffic for advertising revenue and subscriptions. As a result, nearly one in five people under 25 now gets their news from generative AI.
And yet, even though it falls well short of the initial ambitions, this second part of the AI Act is already too restrictive for many economic players, who are calling for a pause in its implementation.
Google is expected to sign the code of practice, while noting that "the AI Act and the code risk slowing down (...) the deployment of AI in Europe." Meta (Facebook), for its part, has announced that it will not sign it at all. A spokesperson went so far as to declare: "The European Union's inconsistent, restrictive, and counterproductive approach (...) contrasts sharply with President Trump's pro-innovation leadership."
It must be said that Meta is already facing legal action in Europe. The National Publishing Union, the Société des Gens de Lettres, and the National Union of Authors and Composers have taken the group to court for "copyright infringement" and "economic parasitism."
Authors' and publishers' organizations believe that the tech giant has exploited, without the authorization of rights holders, "colossal" volumes of copyrighted works to feed its generative AI.